Your First AI Application

Going forward, AI algorithms will be incorporated into more and more everyday applications. For example, you might want to include an image classifier in a smartphone app. To do this, you'd use a deep learning model trained on hundreds of thousands of images as part of the overall application architecture. A large part of software development in the future will be using these types of models as common parts of applications.

In this project, you'll train an image classifier to recognize different species of flowers. You can imagine using something like this in a phone app that tells you the name of the flower your camera is looking at. In practice you'd train this classifier, then export it for use in your application. We'll be using this dataset from Oxford, which contains 102 flower categories; you can see a few examples below.

The project is broken down into multiple steps:

  • Load the image dataset and create a pipeline.
  • Build and train an image classifier on this dataset.
  • Use your trained model to perform inference on flower images.

We'll lead you through each part, which you'll implement in Python.

When you've completed this project, you'll have an application that can be trained on any set of labeled images. Here your network will learn about flowers and end up as a command-line application. But what you do with your new skills depends on your imagination and effort in building a dataset. For example, imagine an app where you take a picture of a car, it tells you what the make and model is, then looks up information about it. Go build your own dataset and make something new.

Import Resources

In [6]:
# The new version of the dataset is only available in the tfds-nightly package.
%pip --no-cache-dir install tensorflow-datasets --user
# DON'T FORGET TO RESTART THE KERNEL AFTER INSTALLING
Requirement already satisfied: tensorflow-datasets in /opt/conda/lib/python3.7/site-packages (1.2.0)
Requirement already satisfied: absl-py in /opt/conda/lib/python3.7/site-packages (from tensorflow-datasets) (0.8.1)
Requirement already satisfied: psutil in /opt/conda/lib/python3.7/site-packages (from tensorflow-datasets) (5.6.7)
Requirement already satisfied: dill in /opt/conda/lib/python3.7/site-packages (from tensorflow-datasets) (0.3.1.1)
Requirement already satisfied: protobuf>=3.6.1 in /opt/conda/lib/python3.7/site-packages (from tensorflow-datasets) (3.11.2)
Requirement already satisfied: requests>=2.19.0 in /opt/conda/lib/python3.7/site-packages (from tensorflow-datasets) (2.22.0)
Requirement already satisfied: tensorflow-metadata in /opt/conda/lib/python3.7/site-packages (from tensorflow-datasets) (0.14.0)
Requirement already satisfied: future in /opt/conda/lib/python3.7/site-packages (from tensorflow-datasets) (0.18.2)
Requirement already satisfied: promise in /opt/conda/lib/python3.7/site-packages (from tensorflow-datasets) (2.2.1)
Requirement already satisfied: termcolor in /opt/conda/lib/python3.7/site-packages (from tensorflow-datasets) (1.1.0)
Requirement already satisfied: numpy in /opt/conda/lib/python3.7/site-packages (from tensorflow-datasets) (1.17.4)
Requirement already satisfied: attrs in /opt/conda/lib/python3.7/site-packages (from tensorflow-datasets) (19.3.0)
Requirement already satisfied: six in /opt/conda/lib/python3.7/site-packages (from tensorflow-datasets) (1.12.0)
Requirement already satisfied: wrapt in /opt/conda/lib/python3.7/site-packages (from tensorflow-datasets) (1.11.2)
Requirement already satisfied: tqdm in /opt/conda/lib/python3.7/site-packages (from tensorflow-datasets) (4.36.1)
Requirement already satisfied: setuptools in /opt/conda/lib/python3.7/site-packages (from protobuf>=3.6.1->tensorflow-datasets) (41.4.0)
Requirement already satisfied: idna<2.9,>=2.5 in /opt/conda/lib/python3.7/site-packages (from requests>=2.19.0->tensorflow-datasets) (2.8)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /opt/conda/lib/python3.7/site-packages (from requests>=2.19.0->tensorflow-datasets) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/lib/python3.7/site-packages (from requests>=2.19.0->tensorflow-datasets) (2019.11.28)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /opt/conda/lib/python3.7/site-packages (from requests>=2.19.0->tensorflow-datasets) (1.24.2)
Requirement already satisfied: googleapis-common-protos in /opt/conda/lib/python3.7/site-packages (from tensorflow-metadata->tensorflow-datasets) (1.6.0)
Note: you may need to restart the kernel to use updated packages.
In [1]:
!pip install grpcio
!pip install tensorflow-gpu==2.0.0 --upgrade --user
Requirement already satisfied: grpcio in /opt/conda/lib/python3.7/site-packages (1.16.1)
Requirement already satisfied: six>=1.5.2 in /opt/conda/lib/python3.7/site-packages (from grpcio) (1.12.0)
Collecting tensorflow-gpu==2.0.0
  Downloading https://files.pythonhosted.org/packages/a1/eb/bc0784af18f612838f90419cf4805c37c20ddb957f5ffe0c42144562dcfa/tensorflow_gpu-2.0.0-cp37-cp37m-manylinux2010_x86_64.whl (380.8MB)
     |████████████████████████████████| 380.8MB 30kB/s
Requirement already satisfied, skipping upgrade: opt-einsum>=2.3.2 in /opt/conda/lib/python3.7/site-packages (from tensorflow-gpu==2.0.0) (3.1.0)
Requirement already satisfied, skipping upgrade: protobuf>=3.6.1 in /opt/conda/lib/python3.7/site-packages (from tensorflow-gpu==2.0.0) (3.11.2)
Requirement already satisfied, skipping upgrade: keras-preprocessing>=1.0.5 in /opt/conda/lib/python3.7/site-packages (from tensorflow-gpu==2.0.0) (1.1.0)
Requirement already satisfied, skipping upgrade: six>=1.10.0 in /opt/conda/lib/python3.7/site-packages (from tensorflow-gpu==2.0.0) (1.12.0)
Requirement already satisfied, skipping upgrade: astor>=0.6.0 in /opt/conda/lib/python3.7/site-packages (from tensorflow-gpu==2.0.0) (0.8.0)
Requirement already satisfied, skipping upgrade: tensorflow-estimator<2.1.0,>=2.0.0 in /opt/conda/lib/python3.7/site-packages (from tensorflow-gpu==2.0.0) (2.0.0)
Requirement already satisfied, skipping upgrade: termcolor>=1.1.0 in /opt/conda/lib/python3.7/site-packages (from tensorflow-gpu==2.0.0) (1.1.0)
Requirement already satisfied, skipping upgrade: tensorboard<2.1.0,>=2.0.0 in /opt/conda/lib/python3.7/site-packages (from tensorflow-gpu==2.0.0) (2.0.0)
Requirement already satisfied, skipping upgrade: numpy<2.0,>=1.16.0 in /opt/conda/lib/python3.7/site-packages (from tensorflow-gpu==2.0.0) (1.17.4)
Requirement already satisfied, skipping upgrade: wheel>=0.26 in /opt/conda/lib/python3.7/site-packages (from tensorflow-gpu==2.0.0) (0.33.6)
Requirement already satisfied, skipping upgrade: gast==0.2.2 in /opt/conda/lib/python3.7/site-packages (from tensorflow-gpu==2.0.0) (0.2.2)
Requirement already satisfied, skipping upgrade: grpcio>=1.8.6 in /opt/conda/lib/python3.7/site-packages (from tensorflow-gpu==2.0.0) (1.16.1)
Requirement already satisfied, skipping upgrade: keras-applications>=1.0.8 in /opt/conda/lib/python3.7/site-packages (from tensorflow-gpu==2.0.0) (1.0.8)
Requirement already satisfied, skipping upgrade: google-pasta>=0.1.6 in /opt/conda/lib/python3.7/site-packages (from tensorflow-gpu==2.0.0) (0.1.8)
Requirement already satisfied, skipping upgrade: wrapt>=1.11.1 in /opt/conda/lib/python3.7/site-packages (from tensorflow-gpu==2.0.0) (1.11.2)
Requirement already satisfied, skipping upgrade: absl-py>=0.7.0 in /opt/conda/lib/python3.7/site-packages (from tensorflow-gpu==2.0.0) (0.8.1)
Requirement already satisfied, skipping upgrade: setuptools in /opt/conda/lib/python3.7/site-packages (from protobuf>=3.6.1->tensorflow-gpu==2.0.0) (41.4.0)
Requirement already satisfied, skipping upgrade: markdown>=2.6.8 in /opt/conda/lib/python3.7/site-packages (from tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0) (3.1.1)
Requirement already satisfied, skipping upgrade: werkzeug>=0.11.15 in /opt/conda/lib/python3.7/site-packages (from tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0) (0.16.0)
Requirement already satisfied, skipping upgrade: h5py in /opt/conda/lib/python3.7/site-packages (from keras-applications>=1.0.8->tensorflow-gpu==2.0.0) (2.9.0)
Installing collected packages: tensorflow-gpu
  WARNING: The scripts saved_model_cli, tensorboard, tf_upgrade_v2, tflite_convert, toco and toco_from_protos are installed in '/root/.local/bin' which is not on PATH.
  Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed tensorflow-gpu-2.0.0
In [2]:
# tfds versions after 3.2.1 require TensorFlow 2.1.0, which might cause GPU availability to report False
!pip install tensorflow-datasets==3.2.1 --upgrade --user
Collecting tensorflow-datasets==3.2.1
  Downloading https://files.pythonhosted.org/packages/ca/c9/d97bdf931edbae9aebc767633d088bd674136d5fe7587ef693b7cb6a1883/tensorflow_datasets-3.2.1-py3-none-any.whl (3.4MB)
     |████████████████████████████████| 3.4MB 4.8MB/s eta 0:00:01
Requirement already satisfied, skipping upgrade: dill in /opt/conda/lib/python3.7/site-packages (from tensorflow-datasets==3.2.1) (0.3.1.1)
Requirement already satisfied, skipping upgrade: tensorflow-metadata in /opt/conda/lib/python3.7/site-packages (from tensorflow-datasets==3.2.1) (0.14.0)
Requirement already satisfied, skipping upgrade: tqdm in /opt/conda/lib/python3.7/site-packages (from tensorflow-datasets==3.2.1) (4.36.1)
Requirement already satisfied, skipping upgrade: termcolor in /opt/conda/lib/python3.7/site-packages (from tensorflow-datasets==3.2.1) (1.1.0)
Requirement already satisfied, skipping upgrade: protobuf>=3.6.1 in /opt/conda/lib/python3.7/site-packages (from tensorflow-datasets==3.2.1) (3.11.2)
Requirement already satisfied, skipping upgrade: absl-py in /opt/conda/lib/python3.7/site-packages (from tensorflow-datasets==3.2.1) (0.8.1)
Requirement already satisfied, skipping upgrade: requests>=2.19.0 in /opt/conda/lib/python3.7/site-packages (from tensorflow-datasets==3.2.1) (2.22.0)
Requirement already satisfied, skipping upgrade: numpy in /opt/conda/lib/python3.7/site-packages (from tensorflow-datasets==3.2.1) (1.17.4)
Requirement already satisfied, skipping upgrade: six in /opt/conda/lib/python3.7/site-packages (from tensorflow-datasets==3.2.1) (1.12.0)
Requirement already satisfied, skipping upgrade: promise in /opt/conda/lib/python3.7/site-packages (from tensorflow-datasets==3.2.1) (2.2.1)
Requirement already satisfied, skipping upgrade: attrs>=18.1.0 in /opt/conda/lib/python3.7/site-packages (from tensorflow-datasets==3.2.1) (19.3.0)
Requirement already satisfied, skipping upgrade: future in /opt/conda/lib/python3.7/site-packages (from tensorflow-datasets==3.2.1) (0.18.2)
Requirement already satisfied, skipping upgrade: wrapt in /opt/conda/lib/python3.7/site-packages (from tensorflow-datasets==3.2.1) (1.11.2)
Requirement already satisfied, skipping upgrade: googleapis-common-protos in /opt/conda/lib/python3.7/site-packages (from tensorflow-metadata->tensorflow-datasets==3.2.1) (1.6.0)
Requirement already satisfied, skipping upgrade: setuptools in /opt/conda/lib/python3.7/site-packages (from protobuf>=3.6.1->tensorflow-datasets==3.2.1) (41.4.0)
Requirement already satisfied, skipping upgrade: idna<2.9,>=2.5 in /opt/conda/lib/python3.7/site-packages (from requests>=2.19.0->tensorflow-datasets==3.2.1) (2.8)
Requirement already satisfied, skipping upgrade: chardet<3.1.0,>=3.0.2 in /opt/conda/lib/python3.7/site-packages (from requests>=2.19.0->tensorflow-datasets==3.2.1) (3.0.4)
Requirement already satisfied, skipping upgrade: certifi>=2017.4.17 in /opt/conda/lib/python3.7/site-packages (from requests>=2.19.0->tensorflow-datasets==3.2.1) (2019.11.28)
Requirement already satisfied, skipping upgrade: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /opt/conda/lib/python3.7/site-packages (from requests>=2.19.0->tensorflow-datasets==3.2.1) (1.24.2)
Installing collected packages: tensorflow-datasets
Successfully installed tensorflow-datasets-3.2.1
In [2]:
# Import TensorFlow 
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_hub as hub
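
Since the installs above pin specific TensorFlow and tensorflow-datasets versions, it can help to verify the environment right after importing. A quick check like the one below (using only standard TensorFlow calls) prints the version in use and whether a GPU is visible.

In [ ]:
# Confirm the TensorFlow version in use and whether a GPU is visible.
print('TensorFlow version:', tf.__version__)
print('GPU available:', tf.test.is_gpu_available())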
In [27]:
# TODO: Make all other necessary imports.
import numpy as np
import pandas as pd
import json
import time

%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt

Load the Dataset

Here you'll use tensorflow_datasets to load the Oxford Flowers 102 dataset. This dataset has 3 splits: 'train', 'test', and 'validation'. You'll also need to make sure the training data is normalized and resized to 224x224 pixels as required by the pre-trained networks.

The validation and testing sets are used to measure the model's performance on data it hasn't seen yet, but you'll still need to normalize and resize the images to the appropriate size.

In [4]:
# Download data to default local directory "~/tensorflow_datasets"
!python -m tensorflow_datasets.scripts.download_and_prepare --register_checksums=True --datasets=oxford_flowers102

# TODO: Load the dataset with TensorFlow Datasets. Hint: use tfds.load()
dataset = tfds.load('oxford_flowers102', data_dir="~/tensorflow_datasets", with_info=True)

# TODO: Create a training set, a validation set and a test set.
data, dataset_info = dataset
test = data.get('test')
train = data.get('train')
validate = data.get('validation')
I0514 22:48:29.698849 140053016581888 download_and_prepare.py:201] Running download_and_prepare for dataset(s):
oxford_flowers102
I0514 22:48:29.700363 140053016581888 dataset_info.py:358] Load dataset info from /root/tensorflow_datasets/oxford_flowers102/2.1.1
I0514 22:48:29.711022 140053016581888 download_and_prepare.py:139] download_and_prepare for dataset oxford_flowers102/2.1.1...
I0514 22:48:29.711339 140053016581888 dataset_builder.py:288] Reusing dataset oxford_flowers102 (/root/tensorflow_datasets/oxford_flowers102/2.1.1)
name: "oxford_flowers102"
description: "The Oxford Flowers 102 dataset is a consistent of 102 flower categories commonly occurring\nin the United Kingdom. Each class consists of between 40 and 258 images. The images have\nlarge scale, pose and light variations. In addition, there are categories that have large\nvariations within the category and several very similar categories.\n\nThe dataset is divided into a training set, a validation set and a test set.\nThe training set and validation set each consist of 10 images per class (totalling 1020 images each).\nThe test set consists of the remaining 6149 images (minimum 20 per class)."
citation: "@InProceedings{Nilsback08,\n   author = \"Nilsback, M-E. and Zisserman, A.\",\n   title = \"Automated Flower Classification over a Large Number of Classes\",\n   booktitle = \"Proceedings of the Indian Conference on Computer Vision, Graphics and Image Processing\",\n   year = \"2008\",\n   month = \"Dec\"\n}"
location {
  urls: "https://www.robots.ox.ac.uk/~vgg/data/flowers/102/"
}
schema {
  feature {
    name: "file_name"
    type: BYTES
    domain: "file_name"
    presence {
      min_fraction: 1.0
      min_count: 1
    }
    shape {
      dim {
        size: 1
      }
    }
  }
  feature {
    name: "image"
    type: BYTES
    presence {
      min_fraction: 1.0
      min_count: 1
    }
    shape {
      dim {
        size: -1
      }
      dim {
        size: -1
      }
      dim {
        size: 3
      }
    }
  }
  feature {
    name: "label"
    type: INT
    presence {
      min_fraction: 1.0
      min_count: 1
    }
    shape {
      dim {
        size: 1
      }
    }
  }
  string_domain {
    name: "file_name"
    value: "image_08133.jpg"
    value: "image_08138.jpg"
    value: "image_08157.jpg"
    value: "image_08173.jpg"
    value: "image_08174.jpg"
    value: "image_08176.jpg"
    value: "image_08179.jpg"
    value: "image_08182.jpg"
    value: "image_08185.jpg"
    value: "image_08187.jpg"
  }
}
splits {
  name: "test"
  statistics {
    num_examples: 6149
    features {
      type: STRING
      string_stats {
        common_stats {
          num_non_missing: 6149
          min_num_values: 1
          max_num_values: 1
          avg_num_values: 1.0
          num_values_histogram {
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 614.9
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 614.9
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 614.9
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 614.9
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 614.9
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 614.9
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 614.9
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 614.9
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 614.9
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 614.9
            }
            type: QUANTILES
          }
          tot_num_values: 6149
        }
        unique: 6149
        top_values {
          value: "image_08189.jpg"
          frequency: 1.0
        }
        top_values {
          value: "image_08188.jpg"
          frequency: 1.0
        }
        top_values {
          value: "image_08186.jpg"
          frequency: 1.0
        }
        top_values {
          value: "image_08184.jpg"
          frequency: 1.0
        }
        top_values {
          value: "image_08183.jpg"
          frequency: 1.0
        }
        top_values {
          value: "image_08181.jpg"
          frequency: 1.0
        }
        top_values {
          value: "image_08180.jpg"
          frequency: 1.0
        }
        top_values {
          value: "image_08178.jpg"
          frequency: 1.0
        }
        top_values {
          value: "image_08172.jpg"
          frequency: 1.0
        }
        top_values {
          value: "image_08171.jpg"
          frequency: 1.0
        }
        avg_length: 15.0
        rank_histogram {
          buckets {
            label: "image_08189.jpg"
            sample_count: 1.0
          }
          buckets {
            low_rank: 1
            high_rank: 1
            label: "image_08188.jpg"
            sample_count: 1.0
          }
          buckets {
            low_rank: 2
            high_rank: 2
            label: "image_08186.jpg"
            sample_count: 1.0
          }
          buckets {
            low_rank: 3
            high_rank: 3
            label: "image_08184.jpg"
            sample_count: 1.0
          }
          buckets {
            low_rank: 4
            high_rank: 4
            label: "image_08183.jpg"
            sample_count: 1.0
          }
          buckets {
            low_rank: 5
            high_rank: 5
            label: "image_08181.jpg"
            sample_count: 1.0
          }
          buckets {
            low_rank: 6
            high_rank: 6
            label: "image_08180.jpg"
            sample_count: 1.0
          }
          buckets {
            low_rank: 7
            high_rank: 7
            label: "image_08178.jpg"
            sample_count: 1.0
          }
          buckets {
            low_rank: 8
            high_rank: 8
            label: "image_08172.jpg"
            sample_count: 1.0
          }
          buckets {
            low_rank: 9
            high_rank: 9
            label: "image_08171.jpg"
            sample_count: 1.0
          }
        }
      }
      path {
        step: "file_name"
      }
    }
    features {
      type: STRING
      string_stats {
        common_stats {
          num_non_missing: 6149
          min_num_values: 1
          max_num_values: 1
          avg_num_values: 1.0
          num_values_histogram {
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 614.9
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 614.9
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 614.9
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 614.9
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 614.9
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 614.9
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 614.9
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 614.9
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 614.9
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 614.9
            }
            type: QUANTILES
          }
          tot_num_values: 6149
        }
        unique: 6147
        top_values {
          value: "__BYTES_VALUE__"
          frequency: 2.0
        }
        top_values {
          value: "__BYTES_VALUE__"
          frequency: 2.0
        }
        top_values {
          value: "__BYTES_VALUE__"
          frequency: 1.0
        }
        top_values {
          value: "__BYTES_VALUE__"
          frequency: 1.0
        }
        top_values {
          value: "__BYTES_VALUE__"
          frequency: 1.0
        }
        top_values {
          value: "__BYTES_VALUE__"
          frequency: 1.0
        }
        top_values {
          value: "__BYTES_VALUE__"
          frequency: 1.0
        }
        top_values {
          value: "__BYTES_VALUE__"
          frequency: 1.0
        }
        top_values {
          value: "__BYTES_VALUE__"
          frequency: 1.0
        }
        top_values {
          value: "__BYTES_VALUE__"
          frequency: 1.0
        }
        avg_length: 42333.9453125
        rank_histogram {
          buckets {
            label: "__BYTES_VALUE__"
            sample_count: 2.0
          }
          buckets {
            low_rank: 1
            high_rank: 1
            label: "__BYTES_VALUE__"
            sample_count: 2.0
          }
          buckets {
            low_rank: 2
            high_rank: 2
            label: "__BYTES_VALUE__"
            sample_count: 1.0
          }
          buckets {
            low_rank: 3
            high_rank: 3
            label: "__BYTES_VALUE__"
            sample_count: 1.0
          }
          buckets {
            low_rank: 4
            high_rank: 4
            label: "__BYTES_VALUE__"
            sample_count: 1.0
          }
          buckets {
            low_rank: 5
            high_rank: 5
            label: "__BYTES_VALUE__"
            sample_count: 1.0
          }
          buckets {
            low_rank: 6
            high_rank: 6
            label: "__BYTES_VALUE__"
            sample_count: 1.0
          }
          buckets {
            low_rank: 7
            high_rank: 7
            label: "__BYTES_VALUE__"
            sample_count: 1.0
          }
          buckets {
            low_rank: 8
            high_rank: 8
            label: "__BYTES_VALUE__"
            sample_count: 1.0
          }
          buckets {
            low_rank: 9
            high_rank: 9
            label: "__BYTES_VALUE__"
            sample_count: 1.0
          }
        }
      }
      path {
        step: "image"
      }
    }
    features {
      num_stats {
        common_stats {
          num_non_missing: 6149
          min_num_values: 1
          max_num_values: 1
          avg_num_values: 1.0
          num_values_histogram {
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 614.9
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 614.9
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 614.9
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 614.9
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 614.9
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 614.9
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 614.9
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 614.9
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 614.9
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 614.9
            }
            type: QUANTILES
          }
          tot_num_values: 6149
        }
        mean: 57.81362823223289
        std_dev: 26.889661542349568
        num_zeros: 20
        median: 60.0
        max: 101.0
        histograms {
          buckets {
            high_value: 10.1
            sample_count: 388.0019
          }
          buckets {
            low_value: 10.1
            high_value: 20.2
            sample_count: 388.00190000000003
          }
          buckets {
            low_value: 20.2
            high_value: 30.299999999999997
            sample_count: 394.1509
          }
          buckets {
            low_value: 30.299999999999997
            high_value: 40.4
            sample_count: 449.4919
          }
          buckets {
            low_value: 40.4
            high_value: 50.5
            sample_count: 855.3258999999999
          }
          buckets {
            low_value: 50.5
            high_value: 60.599999999999994
            sample_count: 621.6638999999999
          }
          buckets {
            low_value: 60.599999999999994
            high_value: 70.7
            sample_count: 418.74690000000004
          }
          buckets {
            low_value: 70.7
            high_value: 80.8
            sample_count: 1187.3719
          }
          buckets {
            low_value: 80.8
            high_value: 90.89999999999999
            sample_count: 812.2829
          }
          buckets {
            low_value: 90.89999999999999
            high_value: 101.0
            sample_count: 633.9619
          }
        }
        histograms {
          buckets {
            high_value: 16.0
            sample_count: 614.9
          }
          buckets {
            low_value: 16.0
            high_value: 33.0
            sample_count: 614.9
          }
          buckets {
            low_value: 33.0
            high_value: 44.0
            sample_count: 614.9
          }
          buckets {
            low_value: 44.0
            high_value: 50.0
            sample_count: 614.9
          }
          buckets {
            low_value: 50.0
            high_value: 60.0
            sample_count: 614.9
          }
          buckets {
            low_value: 60.0
            high_value: 72.0
            sample_count: 614.9
          }
          buckets {
            low_value: 72.0
            high_value: 76.0
            sample_count: 614.9
          }
          buckets {
            low_value: 76.0
            high_value: 83.0
            sample_count: 614.9
          }
          buckets {
            low_value: 83.0
            high_value: 91.0
            sample_count: 614.9
          }
          buckets {
            low_value: 91.0
            high_value: 101.0
            sample_count: 614.9
          }
          type: QUANTILES
        }
      }
      path {
        step: "label"
      }
    }
  }
  shard_lengths: 3074
  shard_lengths: 3075
  num_bytes: 260784877
}
splits {
  name: "train"
  statistics {
    num_examples: 1020
    features {
      type: STRING
      string_stats {
        common_stats {
          num_non_missing: 1020
          min_num_values: 1
          max_num_values: 1
          avg_num_values: 1.0
          num_values_histogram {
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            type: QUANTILES
          }
          tot_num_values: 1020
        }
        unique: 1020
        top_values {
          value: "image_08177.jpg"
          frequency: 1.0
        }
        top_values {
          value: "image_08175.jpg"
          frequency: 1.0
        }
        top_values {
          value: "image_08167.jpg"
          frequency: 1.0
        }
        top_values {
          value: "image_08166.jpg"
          frequency: 1.0
        }
        top_values {
          value: "image_08165.jpg"
          frequency: 1.0
        }
        top_values {
          value: "image_08164.jpg"
          frequency: 1.0
        }
        top_values {
          value: "image_08161.jpg"
          frequency: 1.0
        }
        top_values {
          value: "image_08154.jpg"
          frequency: 1.0
        }
        top_values {
          value: "image_08148.jpg"
          frequency: 1.0
        }
        top_values {
          value: "image_08135.jpg"
          frequency: 1.0
        }
        avg_length: 15.0
        rank_histogram {
          buckets {
            label: "image_08177.jpg"
            sample_count: 1.0
          }
          buckets {
            low_rank: 1
            high_rank: 1
            label: "image_08175.jpg"
            sample_count: 1.0
          }
          buckets {
            low_rank: 2
            high_rank: 2
            label: "image_08167.jpg"
            sample_count: 1.0
          }
          buckets {
            low_rank: 3
            high_rank: 3
            label: "image_08166.jpg"
            sample_count: 1.0
          }
          buckets {
            low_rank: 4
            high_rank: 4
            label: "image_08165.jpg"
            sample_count: 1.0
          }
          buckets {
            low_rank: 5
            high_rank: 5
            label: "image_08164.jpg"
            sample_count: 1.0
          }
          buckets {
            low_rank: 6
            high_rank: 6
            label: "image_08161.jpg"
            sample_count: 1.0
          }
          buckets {
            low_rank: 7
            high_rank: 7
            label: "image_08154.jpg"
            sample_count: 1.0
          }
          buckets {
            low_rank: 8
            high_rank: 8
            label: "image_08148.jpg"
            sample_count: 1.0
          }
          buckets {
            low_rank: 9
            high_rank: 9
            label: "image_08135.jpg"
            sample_count: 1.0
          }
        }
      }
      path {
        step: "file_name"
      }
    }
    features {
      type: STRING
      string_stats {
        common_stats {
          num_non_missing: 1020
          min_num_values: 1
          max_num_values: 1
          avg_num_values: 1.0
          num_values_histogram {
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            type: QUANTILES
          }
          tot_num_values: 1020
        }
        unique: 1020
        top_values {
          value: "__BYTES_VALUE__"
          frequency: 1.0
        }
        top_values {
          value: "__BYTES_VALUE__"
          frequency: 1.0
        }
        top_values {
          value: "__BYTES_VALUE__"
          frequency: 1.0
        }
        top_values {
          value: "__BYTES_VALUE__"
          frequency: 1.0
        }
        top_values {
          value: "__BYTES_VALUE__"
          frequency: 1.0
        }
        top_values {
          value: "__BYTES_VALUE__"
          frequency: 1.0
        }
        top_values {
          value: "__BYTES_VALUE__"
          frequency: 1.0
        }
        top_values {
          value: "__BYTES_VALUE__"
          frequency: 1.0
        }
        top_values {
          value: "__BYTES_VALUE__"
          frequency: 1.0
        }
        top_values {
          value: "__BYTES_VALUE__"
          frequency: 1.0
        }
        avg_length: 42545.15625
        rank_histogram {
          buckets {
            label: "__BYTES_VALUE__"
            sample_count: 1.0
          }
          buckets {
            low_rank: 1
            high_rank: 1
            label: "__BYTES_VALUE__"
            sample_count: 1.0
          }
          buckets {
            low_rank: 2
            high_rank: 2
            label: "__BYTES_VALUE__"
            sample_count: 1.0
          }
          buckets {
            low_rank: 3
            high_rank: 3
            label: "__BYTES_VALUE__"
            sample_count: 1.0
          }
          buckets {
            low_rank: 4
            high_rank: 4
            label: "__BYTES_VALUE__"
            sample_count: 1.0
          }
          buckets {
            low_rank: 5
            high_rank: 5
            label: "__BYTES_VALUE__"
            sample_count: 1.0
          }
          buckets {
            low_rank: 6
            high_rank: 6
            label: "__BYTES_VALUE__"
            sample_count: 1.0
          }
          buckets {
            low_rank: 7
            high_rank: 7
            label: "__BYTES_VALUE__"
            sample_count: 1.0
          }
          buckets {
            low_rank: 8
            high_rank: 8
            label: "__BYTES_VALUE__"
            sample_count: 1.0
          }
          buckets {
            low_rank: 9
            high_rank: 9
            label: "__BYTES_VALUE__"
            sample_count: 1.0
          }
        }
      }
      path {
        step: "image"
      }
    }
    features {
      num_stats {
        common_stats {
          num_non_missing: 1020
          min_num_values: 1
          max_num_values: 1
          avg_num_values: 1.0
          num_values_histogram {
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            type: QUANTILES
          }
          tot_num_values: 1020
        }
        mean: 50.5
        std_dev: 29.443448620476957
        num_zeros: 10
        median: 51.0
        max: 101.0
        histograms {
          buckets {
            high_value: 10.1
            sample_count: 109.242
          }
          buckets {
            low_value: 10.1
            high_value: 20.2
            sample_count: 100.06200000000001
          }
          buckets {
            low_value: 20.2
            high_value: 30.299999999999997
            sample_count: 100.06200000000001
          }
          buckets {
            low_value: 30.299999999999997
            high_value: 40.4
            sample_count: 100.06200000000001
          }
          buckets {
            low_value: 40.4
            high_value: 50.5
            sample_count: 100.06200000000001
          }
          buckets {
            low_value: 50.5
            high_value: 60.599999999999994
            sample_count: 101.08200000000001
          }
          buckets {
            low_value: 60.599999999999994
            high_value: 70.7
            sample_count: 100.06200000000001
          }
          buckets {
            low_value: 70.7
            high_value: 80.8
            sample_count: 100.06200000000001
          }
          buckets {
            low_value: 80.8
            high_value: 90.89999999999999
            sample_count: 100.06200000000001
          }
          buckets {
            low_value: 90.89999999999999
            high_value: 101.0
            sample_count: 109.242
          }
        }
        histograms {
          buckets {
            high_value: 10.0
            sample_count: 102.0
          }
          buckets {
            low_value: 10.0
            high_value: 20.0
            sample_count: 102.0
          }
          buckets {
            low_value: 20.0
            high_value: 30.0
            sample_count: 102.0
          }
          buckets {
            low_value: 30.0
            high_value: 40.0
            sample_count: 102.0
          }
          buckets {
            low_value: 40.0
            high_value: 51.0
            sample_count: 102.0
          }
          buckets {
            low_value: 51.0
            high_value: 61.0
            sample_count: 102.0
          }
          buckets {
            low_value: 61.0
            high_value: 71.0
            sample_count: 102.0
          }
          buckets {
            low_value: 71.0
            high_value: 81.0
            sample_count: 102.0
          }
          buckets {
            low_value: 81.0
            high_value: 91.0
            sample_count: 102.0
          }
          buckets {
            low_value: 91.0
            high_value: 101.0
            sample_count: 102.0
          }
          type: QUANTILES
        }
      }
      path {
        step: "label"
      }
    }
  }
  shard_lengths: 1020
  num_bytes: 43474584
}
splits {
  name: "validation"
  statistics {
    num_examples: 1020
    features {
      type: STRING
      string_stats {
        common_stats {
          num_non_missing: 1020
          min_num_values: 1
          max_num_values: 1
          avg_num_values: 1.0
          num_values_histogram {
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            type: QUANTILES
          }
          tot_num_values: 1020
        }
        unique: 1020
        top_values {
          value: "image_08187.jpg"
          frequency: 1.0
        }
        top_values {
          value: "image_08185.jpg"
          frequency: 1.0
        }
        top_values {
          value: "image_08182.jpg"
          frequency: 1.0
        }
        top_values {
          value: "image_08179.jpg"
          frequency: 1.0
        }
        top_values {
          value: "image_08176.jpg"
          frequency: 1.0
        }
        top_values {
          value: "image_08174.jpg"
          frequency: 1.0
        }
        top_values {
          value: "image_08173.jpg"
          frequency: 1.0
        }
        top_values {
          value: "image_08157.jpg"
          frequency: 1.0
        }
        top_values {
          value: "image_08138.jpg"
          frequency: 1.0
        }
        top_values {
          value: "image_08133.jpg"
          frequency: 1.0
        }
        avg_length: 15.0
        rank_histogram {
          buckets {
            label: "image_08187.jpg"
            sample_count: 1.0
          }
          buckets {
            low_rank: 1
            high_rank: 1
            label: "image_08185.jpg"
            sample_count: 1.0
          }
          buckets {
            low_rank: 2
            high_rank: 2
            label: "image_08182.jpg"
            sample_count: 1.0
          }
          buckets {
            low_rank: 3
            high_rank: 3
            label: "image_08179.jpg"
            sample_count: 1.0
          }
          buckets {
            low_rank: 4
            high_rank: 4
            label: "image_08176.jpg"
            sample_count: 1.0
          }
          buckets {
            low_rank: 5
            high_rank: 5
            label: "image_08174.jpg"
            sample_count: 1.0
          }
          buckets {
            low_rank: 6
            high_rank: 6
            label: "image_08173.jpg"
            sample_count: 1.0
          }
          buckets {
            low_rank: 7
            high_rank: 7
            label: "image_08157.jpg"
            sample_count: 1.0
          }
          buckets {
            low_rank: 8
            high_rank: 8
            label: "image_08138.jpg"
            sample_count: 1.0
          }
          buckets {
            low_rank: 9
            high_rank: 9
            label: "image_08133.jpg"
            sample_count: 1.0
          }
        }
      }
      path {
        step: "file_name"
      }
    }
    features {
      type: STRING
      string_stats {
        common_stats {
          num_non_missing: 1020
          min_num_values: 1
          max_num_values: 1
          avg_num_values: 1.0
          num_values_histogram {
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            type: QUANTILES
          }
          tot_num_values: 1020
        }
        unique: 1020
        top_values {
          value: "__BYTES_VALUE__"
          frequency: 1.0
        }
        top_values {
          value: "__BYTES_VALUE__"
          frequency: 1.0
        }
        top_values {
          value: "__BYTES_VALUE__"
          frequency: 1.0
        }
        top_values {
          value: "__BYTES_VALUE__"
          frequency: 1.0
        }
        top_values {
          value: "__BYTES_VALUE__"
          frequency: 1.0
        }
        top_values {
          value: "__BYTES_VALUE__"
          frequency: 1.0
        }
        top_values {
          value: "__BYTES_VALUE__"
          frequency: 1.0
        }
        top_values {
          value: "__BYTES_VALUE__"
          frequency: 1.0
        }
        top_values {
          value: "__BYTES_VALUE__"
          frequency: 1.0
        }
        top_values {
          value: "__BYTES_VALUE__"
          frequency: 1.0
        }
        avg_length: 42256.625
        rank_histogram {
          buckets {
            label: "__BYTES_VALUE__"
            sample_count: 1.0
          }
          buckets {
            low_rank: 1
            high_rank: 1
            label: "__BYTES_VALUE__"
            sample_count: 1.0
          }
          buckets {
            low_rank: 2
            high_rank: 2
            label: "__BYTES_VALUE__"
            sample_count: 1.0
          }
          buckets {
            low_rank: 3
            high_rank: 3
            label: "__BYTES_VALUE__"
            sample_count: 1.0
          }
          buckets {
            low_rank: 4
            high_rank: 4
            label: "__BYTES_VALUE__"
            sample_count: 1.0
          }
          buckets {
            low_rank: 5
            high_rank: 5
            label: "__BYTES_VALUE__"
            sample_count: 1.0
          }
          buckets {
            low_rank: 6
            high_rank: 6
            label: "__BYTES_VALUE__"
            sample_count: 1.0
          }
          buckets {
            low_rank: 7
            high_rank: 7
            label: "__BYTES_VALUE__"
            sample_count: 1.0
          }
          buckets {
            low_rank: 8
            high_rank: 8
            label: "__BYTES_VALUE__"
            sample_count: 1.0
          }
          buckets {
            low_rank: 9
            high_rank: 9
            label: "__BYTES_VALUE__"
            sample_count: 1.0
          }
        }
      }
      path {
        step: "image"
      }
    }
    features {
      num_stats {
        common_stats {
          num_non_missing: 1020
          min_num_values: 1
          max_num_values: 1
          avg_num_values: 1.0
          num_values_histogram {
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            buckets {
              low_value: 1.0
              high_value: 1.0
              sample_count: 102.0
            }
            type: QUANTILES
          }
          tot_num_values: 1020
        }
        mean: 50.5
        std_dev: 29.443448620476957
        num_zeros: 10
        median: 51.0
        max: 101.0
        histograms {
          buckets {
            high_value: 10.1
            sample_count: 109.242
          }
          buckets {
            low_value: 10.1
            high_value: 20.2
            sample_count: 100.06200000000001
          }
          buckets {
            low_value: 20.2
            high_value: 30.299999999999997
            sample_count: 100.06200000000001
          }
          buckets {
            low_value: 30.299999999999997
            high_value: 40.4
            sample_count: 100.06200000000001
          }
          buckets {
            low_value: 40.4
            high_value: 50.5
            sample_count: 100.06200000000001
          }
          buckets {
            low_value: 50.5
            high_value: 60.599999999999994
            sample_count: 101.08200000000001
          }
          buckets {
            low_value: 60.599999999999994
            high_value: 70.7
            sample_count: 100.06200000000001
          }
          buckets {
            low_value: 70.7
            high_value: 80.8
            sample_count: 100.06200000000001
          }
          buckets {
            low_value: 80.8
            high_value: 90.89999999999999
            sample_count: 100.06200000000001
          }
          buckets {
            low_value: 90.89999999999999
            high_value: 101.0
            sample_count: 109.242
          }
        }
        histograms {
          buckets {
            high_value: 10.0
            sample_count: 102.0
          }
          buckets {
            low_value: 10.0
            high_value: 20.0
            sample_count: 102.0
          }
          buckets {
            low_value: 20.0
            high_value: 30.0
            sample_count: 102.0
          }
          buckets {
            low_value: 30.0
            high_value: 40.0
            sample_count: 102.0
          }
          buckets {
            low_value: 40.0
            high_value: 51.0
            sample_count: 102.0
          }
          buckets {
            low_value: 51.0
            high_value: 61.0
            sample_count: 102.0
          }
          buckets {
            low_value: 61.0
            high_value: 71.0
            sample_count: 102.0
          }
          buckets {
            low_value: 71.0
            high_value: 81.0
            sample_count: 102.0
          }
          buckets {
            low_value: 81.0
            high_value: 91.0
            sample_count: 102.0
          }
          buckets {
            low_value: 91.0
            high_value: 101.0
            sample_count: 102.0
          }
          type: QUANTILES
        }
      }
      path {
        step: "label"
      }
    }
  }
  shard_lengths: 1020
  num_bytes: 43180278
}
supervised_keys {
  input: "image"
  output: "label"
}
version: "2.1.1"
download_size: 344878000

Explore the Dataset

In [5]:
# TODO: Get the number of examples in each set from the dataset info.
n_train = dataset_info.splits['train'].num_examples
n_validate = dataset_info.splits['validation'].num_examples
n_test = dataset_info.splits['test'].num_examples

# TODO: Get the number of classes in the dataset from the dataset info.
n_classes = dataset_info.features['label'].num_classes

print(f"Number of training examples: {n_train}")
print(f"Number of validation examples: {n_validate}")
print(f"Number of testing examples: {n_test}")
print(f"Number of classes: {n_classes}")
Number of training examples: 1020
Number of validation examples: 1020
Number of testing examples: 6149
Number of classes: 102
In [6]:
# TODO: Print the shape and corresponding label of 3 images in the training set.
for item in train.take(3):
    image = item['image']
    label = item['label']
    print(image.shape)
    print(label)
(500, 667, 3)
tf.Tensor(72, shape=(), dtype=int64)
(500, 666, 3)
tf.Tensor(84, shape=(), dtype=int64)
(670, 500, 3)
tf.Tensor(70, shape=(), dtype=int64)
In [7]:
# TODO: Plot 1 image from the training set.
# Note: `image` and `label` still hold the last example from the loop in the previous cell.
class_names = dataset_info.features['label'].names

# Set the title of the plot to the corresponding image label.
plt.imshow(image)
plt.title(class_names[label.numpy()])
plt.show()

Label Mapping

You'll also need to load in a mapping from label to category name. You can find this in the file label_map.json. It's a JSON object which you can read in with the json module. This will give you a dictionary mapping the integer coded labels to the actual names of the flowers.

In [8]:
with open('label_map.json', 'r') as f:
    class_names = json.load(f)
    
print(class_names.keys())
dict_keys(['21', '3', '45', '1', '34', '27', '7', '16', '25', '26', '79', '39', '24', '67', '35', '32', '10', '6', '93', '33', '9', '102', '14', '19', '100', '13', '49', '15', '61', '31', '64', '68', '63', '69', '62', '20', '38', '4', '86', '101', '42', '22', '2', '54', '66', '70', '85', '99', '87', '5', '92', '28', '97', '57', '40', '47', '59', '48', '55', '36', '91', '29', '71', '90', '18', '98', '8', '30', '17', '52', '84', '12', '11', '96', '23', '50', '44', '53', '72', '65', '80', '76', '37', '56', '60', '82', '58', '75', '41', '95', '43', '83', '78', '88', '94', '81', '74', '89', '73', '46', '77', '51'])
In [9]:
# TODO: Plot 1 image from the training set. Set the title
# of the plot to the corresponding class name.
plt.imshow(image)
plt.title(class_names[str(label.numpy() + 1)])  # +1 because the keys in label_map start at 1, not 0
plt.show()

Create Pipeline

In [10]:
# TODO: Create a pipeline for each set.
batch_size = 32
image_size =224

def format_image(item):
    image = tf.image.resize(tf.cast(item['image'], tf.float32), (image_size, image_size)) / 255
    label = item['label']
    return image, label

training_batches = train.shuffle(n_train//4).map(format_image).batch(batch_size).prefetch(1)
validation_batches = validate.map(format_image).batch(batch_size).prefetch(1)
testing_batches = test.map(format_image).batch(batch_size).prefetch(1)
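
As a quick sanity check (a minimal sketch, not required by the rubric), you can pull a single batch from the training pipeline and confirm that the shapes and pixel value range match what the MobileNet feature extractor expects:

# Pull one batch from the training pipeline and inspect it.
for image_batch, label_batch in training_batches.take(1):
    print(image_batch.shape)   # expected: (32, 224, 224, 3)
    print(label_batch.shape)   # expected: (32,)
    print(image_batch.numpy().min(), image_batch.numpy().max())  # pixel values should lie in [0, 1]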

Build and Train the Classifier

Now that the data is ready, it's time to build and train the classifier. You should use the MobileNet pre-trained model from TensorFlow Hub to get the image features. Build and train a new feed-forward classifier using those features.

We're going to leave this part up to you. If you want to talk through it with someone, chat with your fellow students!

Refer to the rubric for guidance on successfully completing this section. Things you'll need to do:

  • Load the MobileNet pre-trained network from TensorFlow Hub.
  • Define a new, untrained feed-forward network as a classifier.
  • Train the classifier.
  • Plot the loss and accuracy values achieved during training for the training and validation set.
  • Save your trained model as a Keras model.

We've left a cell open for you below, but use as many as you need. Our advice is to break the problem up into smaller parts you can run separately. Check that each part is doing what you expect, then move on to the next. You'll likely find that as you work through each part, you'll need to go back and modify your previous code. This is totally normal!

When training, make sure you're updating only the weights of the feed-forward network. You should be able to get the validation accuracy above 70% if you build everything right.

Note for Workspace users: One important tip if you're using the workspace to run your code: to avoid having your workspace disconnect during the long-running tasks in this notebook, please read the earlier page in this lesson called Intro to GPU Workspaces about Keeping Your Session Active. You'll want to include code from the workspace_utils.py module. Also, if your model is over 1 GB when saved as a checkpoint, there might be issues with saving backups in your workspace. If your saved checkpoint is larger than 1 GB (you can open a terminal and check with ls -lh), you should reduce the size of your hidden layers and train again.
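
For reference, a minimal sketch of how that might look, assuming workspace_utils.py provides an active_session context manager as described on that lesson page:

from workspace_utils import active_session  # assumed helper from the GPU workspace lesson

with active_session():
    # Put long-running work (for example, the model.fit call below) inside this block
    # so the workspace session stays active while it runs.
    pass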

In [11]:
# TODO: Build and train your network.

URL = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"

feature_extractor = hub.KerasLayer(URL, input_shape=(image_size, image_size,3))

feature_extractor.trainable = False

model = tf.keras.Sequential([
        feature_extractor,
        tf.keras.layers.Dense(n_classes, activation='softmax')
])

model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
keras_layer (KerasLayer)     (None, 1280)              2257984   
_________________________________________________________________
dense (Dense)                (None, 102)               130662    
=================================================================
Total params: 2,388,646
Trainable params: 130,662
Non-trainable params: 2,257,984
_________________________________________________________________
In [12]:
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

history = model.fit(training_batches,
                    epochs = 100,
                    validation_data=validation_batches, 
                    callbacks = [tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5)])
Epoch 1/100
32/32 [==============================] - 13s 404ms/step - loss: 4.2424 - accuracy: 0.1147 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 2/100
32/32 [==============================] - 6s 182ms/step - loss: 2.0812 - accuracy: 0.6814 - val_loss: 1.9915 - val_accuracy: 0.6559
Epoch 3/100
32/32 [==============================] - 6s 182ms/step - loss: 1.0912 - accuracy: 0.9088 - val_loss: 1.5351 - val_accuracy: 0.7206
Epoch 4/100
32/32 [==============================] - 6s 181ms/step - loss: 0.6574 - accuracy: 0.9676 - val_loss: 1.2931 - val_accuracy: 0.7657
Epoch 5/100
32/32 [==============================] - 6s 182ms/step - loss: 0.4389 - accuracy: 0.9843 - val_loss: 1.1647 - val_accuracy: 0.7784
Epoch 6/100
32/32 [==============================] - 6s 178ms/step - loss: 0.3176 - accuracy: 0.9941 - val_loss: 1.0704 - val_accuracy: 0.7931
Epoch 7/100
32/32 [==============================] - 6s 181ms/step - loss: 0.2328 - accuracy: 0.9971 - val_loss: 1.0083 - val_accuracy: 0.8010
Epoch 8/100
32/32 [==============================] - 6s 182ms/step - loss: 0.1827 - accuracy: 0.9980 - val_loss: 0.9659 - val_accuracy: 0.7990
Epoch 9/100
32/32 [==============================] - 6s 181ms/step - loss: 0.1447 - accuracy: 0.9990 - val_loss: 0.9294 - val_accuracy: 0.8010
Epoch 10/100
32/32 [==============================] - 6s 178ms/step - loss: 0.1201 - accuracy: 1.0000 - val_loss: 0.8992 - val_accuracy: 0.8059
Epoch 11/100
32/32 [==============================] - 6s 181ms/step - loss: 0.0993 - accuracy: 1.0000 - val_loss: 0.8801 - val_accuracy: 0.8059
Epoch 12/100
32/32 [==============================] - 6s 179ms/step - loss: 0.0848 - accuracy: 1.0000 - val_loss: 0.8594 - val_accuracy: 0.8020
Epoch 13/100
32/32 [==============================] - 6s 179ms/step - loss: 0.0739 - accuracy: 1.0000 - val_loss: 0.8414 - val_accuracy: 0.8098
Epoch 14/100
32/32 [==============================] - 6s 179ms/step - loss: 0.0646 - accuracy: 1.0000 - val_loss: 0.8306 - val_accuracy: 0.8118
Epoch 15/100
32/32 [==============================] - 6s 178ms/step - loss: 0.0573 - accuracy: 1.0000 - val_loss: 0.8174 - val_accuracy: 0.8078
Epoch 16/100
32/32 [==============================] - 6s 179ms/step - loss: 0.0505 - accuracy: 1.0000 - val_loss: 0.8049 - val_accuracy: 0.8118
Epoch 17/100
32/32 [==============================] - 6s 180ms/step - loss: 0.0458 - accuracy: 1.0000 - val_loss: 0.7980 - val_accuracy: 0.8108
Epoch 18/100
32/32 [==============================] - 6s 177ms/step - loss: 0.0409 - accuracy: 1.0000 - val_loss: 0.7886 - val_accuracy: 0.8108
Epoch 19/100
32/32 [==============================] - 6s 179ms/step - loss: 0.0375 - accuracy: 1.0000 - val_loss: 0.7819 - val_accuracy: 0.8176
Epoch 20/100
32/32 [==============================] - 6s 179ms/step - loss: 0.0345 - accuracy: 1.0000 - val_loss: 0.7740 - val_accuracy: 0.8167
Epoch 21/100
32/32 [==============================] - 6s 177ms/step - loss: 0.0316 - accuracy: 1.0000 - val_loss: 0.7678 - val_accuracy: 0.8127
Epoch 22/100
32/32 [==============================] - 6s 177ms/step - loss: 0.0291 - accuracy: 1.0000 - val_loss: 0.7642 - val_accuracy: 0.8167
Epoch 23/100
32/32 [==============================] - 6s 179ms/step - loss: 0.0270 - accuracy: 1.0000 - val_loss: 0.7578 - val_accuracy: 0.8167
Epoch 24/100
32/32 [==============================] - 6s 175ms/step - loss: 0.0253 - accuracy: 1.0000 - val_loss: 0.7526 - val_accuracy: 0.8176
Epoch 25/100
32/32 [==============================] - 6s 181ms/step - loss: 0.0234 - accuracy: 1.0000 - val_loss: 0.7484 - val_accuracy: 0.8196
Epoch 26/100
32/32 [==============================] - 6s 177ms/step - loss: 0.0220 - accuracy: 1.0000 - val_loss: 0.7439 - val_accuracy: 0.8176
Epoch 27/100
32/32 [==============================] - 6s 176ms/step - loss: 0.0207 - accuracy: 1.0000 - val_loss: 0.7406 - val_accuracy: 0.8186
Epoch 28/100
32/32 [==============================] - 6s 177ms/step - loss: 0.0193 - accuracy: 1.0000 - val_loss: 0.7366 - val_accuracy: 0.8206
Epoch 29/100
32/32 [==============================] - 6s 178ms/step - loss: 0.0182 - accuracy: 1.0000 - val_loss: 0.7335 - val_accuracy: 0.8216
Epoch 30/100
32/32 [==============================] - 6s 177ms/step - loss: 0.0171 - accuracy: 1.0000 - val_loss: 0.7308 - val_accuracy: 0.8186
Epoch 31/100
32/32 [==============================] - 6s 177ms/step - loss: 0.0163 - accuracy: 1.0000 - val_loss: 0.7274 - val_accuracy: 0.8206
Epoch 32/100
32/32 [==============================] - 6s 179ms/step - loss: 0.0153 - accuracy: 1.0000 - val_loss: 0.7246 - val_accuracy: 0.8206
Epoch 33/100
32/32 [==============================] - 6s 178ms/step - loss: 0.0148 - accuracy: 1.0000 - val_loss: 0.7221 - val_accuracy: 0.8225
Epoch 34/100
32/32 [==============================] - 6s 176ms/step - loss: 0.0138 - accuracy: 1.0000 - val_loss: 0.7193 - val_accuracy: 0.8225
Epoch 35/100
32/32 [==============================] - 6s 178ms/step - loss: 0.0133 - accuracy: 1.0000 - val_loss: 0.7169 - val_accuracy: 0.8206
Epoch 36/100
32/32 [==============================] - 6s 176ms/step - loss: 0.0126 - accuracy: 1.0000 - val_loss: 0.7144 - val_accuracy: 0.8235
Epoch 37/100
32/32 [==============================] - 6s 178ms/step - loss: 0.0120 - accuracy: 1.0000 - val_loss: 0.7135 - val_accuracy: 0.8235
Epoch 38/100
32/32 [==============================] - 6s 177ms/step - loss: 0.0115 - accuracy: 1.0000 - val_loss: 0.7105 - val_accuracy: 0.8235
Epoch 39/100
32/32 [==============================] - 6s 176ms/step - loss: 0.0109 - accuracy: 1.0000 - val_loss: 0.7089 - val_accuracy: 0.8216
Epoch 40/100
32/32 [==============================] - 6s 175ms/step - loss: 0.0105 - accuracy: 1.0000 - val_loss: 0.7065 - val_accuracy: 0.8235
Epoch 41/100
32/32 [==============================] - 6s 176ms/step - loss: 0.0100 - accuracy: 1.0000 - val_loss: 0.7050 - val_accuracy: 0.8235
Epoch 42/100
32/32 [==============================] - 6s 177ms/step - loss: 0.0096 - accuracy: 1.0000 - val_loss: 0.7031 - val_accuracy: 0.8245
Epoch 43/100
32/32 [==============================] - 6s 179ms/step - loss: 0.0092 - accuracy: 1.0000 - val_loss: 0.7017 - val_accuracy: 0.8245
Epoch 44/100
32/32 [==============================] - 6s 176ms/step - loss: 0.0088 - accuracy: 1.0000 - val_loss: 0.7002 - val_accuracy: 0.8245
Epoch 45/100
32/32 [==============================] - 6s 177ms/step - loss: 0.0086 - accuracy: 1.0000 - val_loss: 0.6989 - val_accuracy: 0.8245
Epoch 46/100
32/32 [==============================] - 6s 178ms/step - loss: 0.0082 - accuracy: 1.0000 - val_loss: 0.6974 - val_accuracy: 0.8255
Epoch 47/100
32/32 [==============================] - 6s 177ms/step - loss: 0.0080 - accuracy: 1.0000 - val_loss: 0.6964 - val_accuracy: 0.8245
Epoch 48/100
32/32 [==============================] - 6s 178ms/step - loss: 0.0076 - accuracy: 1.0000 - val_loss: 0.6949 - val_accuracy: 0.8255
Epoch 49/100
32/32 [==============================] - 6s 178ms/step - loss: 0.0073 - accuracy: 1.0000 - val_loss: 0.6936 - val_accuracy: 0.8255
Epoch 50/100
32/32 [==============================] - 6s 175ms/step - loss: 0.0071 - accuracy: 1.0000 - val_loss: 0.6922 - val_accuracy: 0.8255
Epoch 51/100
32/32 [==============================] - 6s 176ms/step - loss: 0.0069 - accuracy: 1.0000 - val_loss: 0.6908 - val_accuracy: 0.8235
Epoch 52/100
32/32 [==============================] - 6s 177ms/step - loss: 0.0066 - accuracy: 1.0000 - val_loss: 0.6898 - val_accuracy: 0.8235
Epoch 53/100
32/32 [==============================] - 6s 177ms/step - loss: 0.0064 - accuracy: 1.0000 - val_loss: 0.6889 - val_accuracy: 0.8245
Epoch 54/100
32/32 [==============================] - 6s 177ms/step - loss: 0.0062 - accuracy: 1.0000 - val_loss: 0.6879 - val_accuracy: 0.8275
Epoch 55/100
32/32 [==============================] - 6s 176ms/step - loss: 0.0059 - accuracy: 1.0000 - val_loss: 0.6868 - val_accuracy: 0.8265
Epoch 56/100
32/32 [==============================] - 6s 176ms/step - loss: 0.0058 - accuracy: 1.0000 - val_loss: 0.6863 - val_accuracy: 0.8255
Epoch 57/100
32/32 [==============================] - 6s 176ms/step - loss: 0.0056 - accuracy: 1.0000 - val_loss: 0.6852 - val_accuracy: 0.8265
Epoch 58/100
32/32 [==============================] - 6s 175ms/step - loss: 0.0053 - accuracy: 1.0000 - val_loss: 0.6841 - val_accuracy: 0.8235
Epoch 59/100
32/32 [==============================] - 6s 178ms/step - loss: 0.0052 - accuracy: 1.0000 - val_loss: 0.6827 - val_accuracy: 0.8245
Epoch 60/100
32/32 [==============================] - 6s 176ms/step - loss: 0.0050 - accuracy: 1.0000 - val_loss: 0.6824 - val_accuracy: 0.8255
Epoch 61/100
32/32 [==============================] - 6s 175ms/step - loss: 0.0048 - accuracy: 1.0000 - val_loss: 0.6806 - val_accuracy: 0.8265
Epoch 63/100
32/32 [==============================] - 6s 179ms/step - loss: 0.0046 - accuracy: 1.0000 - val_loss: 0.6797 - val_accuracy: 0.8284
Epoch 64/100
32/32 [==============================] - 6s 178ms/step - loss: 0.0045 - accuracy: 1.0000 - val_loss: 0.6789 - val_accuracy: 0.8255
Epoch 65/100
32/32 [==============================] - 6s 176ms/step - loss: 0.0044 - accuracy: 1.0000 - val_loss: 0.6782 - val_accuracy: 0.8265
Epoch 66/100
32/32 [==============================] - 6s 174ms/step - loss: 0.0043 - accuracy: 1.0000 - val_loss: 0.6780 - val_accuracy: 0.8245
Epoch 67/100
32/32 [==============================] - 6s 176ms/step - loss: 0.0041 - accuracy: 1.0000 - val_loss: 0.6771 - val_accuracy: 0.8265
Epoch 68/100
32/32 [==============================] - 6s 175ms/step - loss: 0.0040 - accuracy: 1.0000 - val_loss: 0.6762 - val_accuracy: 0.8245
Epoch 69/100
32/32 [==============================] - 6s 181ms/step - loss: 0.0039 - accuracy: 1.0000 - val_loss: 0.6755 - val_accuracy: 0.8275
Epoch 70/100
32/32 [==============================] - 6s 175ms/step - loss: 0.0038 - accuracy: 1.0000 - val_loss: 0.6754 - val_accuracy: 0.8245
Epoch 71/100
32/32 [==============================] - 6s 175ms/step - loss: 0.0037 - accuracy: 1.0000 - val_loss: 0.6744 - val_accuracy: 0.8284
Epoch 72/100
32/32 [==============================] - 6s 175ms/step - loss: 0.0036 - accuracy: 1.0000 - val_loss: 0.6741 - val_accuracy: 0.8265
Epoch 73/100
32/32 [==============================] - 6s 175ms/step - loss: 0.0035 - accuracy: 1.0000 - val_loss: 0.6735 - val_accuracy: 0.8255
Epoch 74/100
32/32 [==============================] - 6s 175ms/step - loss: 0.0034 - accuracy: 1.0000 - val_loss: 0.6727 - val_accuracy: 0.8275
Epoch 75/100
32/32 [==============================] - 6s 175ms/step - loss: 0.0033 - accuracy: 1.0000 - val_loss: 0.6724 - val_accuracy: 0.8255
Epoch 76/100
32/32 [==============================] - 6s 175ms/step - loss: 0.0032 - accuracy: 1.0000 - val_loss: 0.6717 - val_accuracy: 0.8255
Epoch 77/100
32/32 [==============================] - 6s 176ms/step - loss: 0.0032 - accuracy: 1.0000 - val_loss: 0.6712 - val_accuracy: 0.8255
Epoch 78/100
32/32 [==============================] - 6s 175ms/step - loss: 0.0031 - accuracy: 1.0000 - val_loss: 0.6708 - val_accuracy: 0.8255
Epoch 79/100
32/32 [==============================] - 6s 174ms/step - loss: 0.0030 - accuracy: 1.0000 - val_loss: 0.6702 - val_accuracy: 0.8265
Epoch 80/100
32/32 [==============================] - 6s 175ms/step - loss: 0.0029 - accuracy: 1.0000 - val_loss: 0.6695 - val_accuracy: 0.8275
Epoch 81/100
32/32 [==============================] - 6s 175ms/step - loss: 0.0028 - accuracy: 1.0000 - val_loss: 0.6693 - val_accuracy: 0.8275
Epoch 82/100
32/32 [==============================] - 6s 174ms/step - loss: 0.0028 - accuracy: 1.0000 - val_loss: 0.6683 - val_accuracy: 0.8265
Epoch 83/100
32/32 [==============================] - 6s 174ms/step - loss: 0.0027 - accuracy: 1.0000 - val_loss: 0.6686 - val_accuracy: 0.8255
Epoch 84/100
32/32 [==============================] - 6s 174ms/step - loss: 0.0026 - accuracy: 1.0000 - val_loss: 0.6679 - val_accuracy: 0.8265
Epoch 85/100
32/32 [==============================] - 6s 175ms/step - loss: 0.0026 - accuracy: 1.0000 - val_loss: 0.6679 - val_accuracy: 0.8255
Epoch 86/100
32/32 [==============================] - 6s 174ms/step - loss: 0.0025 - accuracy: 1.0000 - val_loss: 0.6673 - val_accuracy: 0.8255
Epoch 87/100
32/32 [==============================] - 6s 173ms/step - loss: 0.0025 - accuracy: 1.0000 - val_loss: 0.6671 - val_accuracy: 0.8235
Epoch 88/100
32/32 [==============================] - 6s 175ms/step - loss: 0.0024 - accuracy: 1.0000 - val_loss: 0.6665 - val_accuracy: 0.8265
Epoch 89/100
32/32 [==============================] - 6s 175ms/step - loss: 0.0024 - accuracy: 1.0000 - val_loss: 0.6661 - val_accuracy: 0.8265
Epoch 90/100
32/32 [==============================] - 6s 175ms/step - loss: 0.0023 - accuracy: 1.0000 - val_loss: 0.6660 - val_accuracy: 0.8255
Epoch 91/100
32/32 [==============================] - 6s 179ms/step - loss: 0.0023 - accuracy: 1.0000 - val_loss: 0.6658 - val_accuracy: 0.8245
Epoch 92/100
32/32 [==============================] - 6s 174ms/step - loss: 0.0022 - accuracy: 1.0000 - val_loss: 0.6650 - val_accuracy: 0.8255
Epoch 93/100
32/32 [==============================] - 6s 175ms/step - loss: 0.0021 - accuracy: 1.0000 - val_loss: 0.6646 - val_accuracy: 0.8265
Epoch 94/100
32/32 [==============================] - 6s 175ms/step - loss: 0.0021 - accuracy: 1.0000 - val_loss: 0.6646 - val_accuracy: 0.8255
Epoch 95/100
32/32 [==============================] - 6s 176ms/step - loss: 0.0021 - accuracy: 1.0000 - val_loss: 0.6647 - val_accuracy: 0.8235
Epoch 96/100
32/32 [==============================] - 6s 177ms/step - loss: 0.0020 - accuracy: 1.0000 - val_loss: 0.6638 - val_accuracy: 0.8255
Epoch 97/100
32/32 [==============================] - 6s 175ms/step - loss: 0.0020 - accuracy: 1.0000 - val_loss: 0.6634 - val_accuracy: 0.8265
Epoch 98/100
32/32 [==============================] - 6s 174ms/step - loss: 0.0019 - accuracy: 1.0000 - val_loss: 0.6633 - val_accuracy: 0.8255
Epoch 99/100
32/32 [==============================] - 6s 176ms/step - loss: 0.0019 - accuracy: 1.0000 - val_loss: 0.6627 - val_accuracy: 0.8245
Epoch 100/100
32/32 [==============================] - 6s 174ms/step - loss: 0.0018 - accuracy: 1.0000 - val_loss: 0.6626 - val_accuracy: 0.8275
In [13]:
# TODO: Plot the loss and accuracy values achieved during training for the training and validation set.
training_accuracy = history.history['accuracy']
validation_accuracy = history.history['val_accuracy']

training_loss = history.history['loss']
validation_loss = history.history['val_loss']

epochs_range = range(len(training_accuracy))  # length of the actual run (early stopping may end before 100 epochs)

plt.figure(figsize=(12, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, training_accuracy, label='Training Accuracy')
plt.plot(epochs_range, validation_accuracy, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, training_loss, label='Training Loss')
plt.plot(epochs_range, validation_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()

Testing your Network

It's good practice to test your trained network on test data, images the network has never seen either in training or validation. This will give you a good estimate for the model's performance on completely new images. You should be able to reach around 70% accuracy on the test set if the model has been trained well.

In [25]:
# TODO: Print the loss and accuracy values achieved on the entire test set.
loss, accuracy = model.evaluate(testing_batches)

print('\nLoss on the TEST Set: {:,.3f}'.format(loss))
print('Accuracy on the TEST Set: {:.3%}'.format(accuracy))
193/193 [==============================] - 16s 85ms/step - loss: 0.8052 - accuracy: 0.7917

Loss on the TEST Set: 0.805
Accuracy on the TEST Set: 79.167%
In [26]:
predictions = model.predict(testing_batches)

for image_batch, label_batch in testing_batches.take(1):
    test_images = image_batch.numpy().squeeze()
    test_labels = label_batch.numpy()

plt.figure(figsize=(10,15))

for n in range(30):
    plt.subplot(6,5,n+1)
    plt.imshow(test_images[n], cmap = plt.cm.binary)
    color = 'green' if np.argmax(predictions[n]) == test_labels[n] else 'red'
    plt.title(class_names[str(np.argmax(predictions[n])+1)], color=color)
    plt.axis('off')

Save the Model

Now that your network is trained, save the model so you can load it later for inference. In the cell below, save your model as a Keras model (i.e., save it as an HDF5 file).

In [28]:
# TODO: Save your trained model as a Keras model.

t = time.time()

path = './{}.h5'.format(int(t))

model.save(path)
In [32]:
# Additionally export the model as a TensorFlow SavedModel using the (now deprecated)
# experimental API. Note: despite the .h5 suffix, this creates a directory, not an HDF5 file.
t = time.time()

path = './{}.h5'.format(int(t))

tf.keras.experimental.export_saved_model(model, path)
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures INCLUDED in export for Predict: None
INFO:tensorflow:Signatures INCLUDED in export for Train: ['train']
INFO:tensorflow:Signatures INCLUDED in export for Eval: None
WARNING:tensorflow:Export includes no default signature!
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.iter
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_1
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_2
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.decay
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.learning_rate
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).layer_with_weights-1.kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).layer_with_weights-1.bias
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).layer_with_weights-1.kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).layer_with_weights-1.bias
WARNING:tensorflow:A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/alpha/guide/checkpoints#loading_mechanics for details.
WARNING:tensorflow:Issue encountered when serializing variables.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'list' object has no attribute 'name'
INFO:tensorflow:No assets to save.
INFO:tensorflow:No assets to write.
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures INCLUDED in export for Predict: None
INFO:tensorflow:Signatures INCLUDED in export for Train: None
INFO:tensorflow:Signatures INCLUDED in export for Eval: ['eval']
WARNING:tensorflow:Export includes no default signature!
WARNING:tensorflow:Issue encountered when serializing variables.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'list' object has no attribute 'name'
INFO:tensorflow:No assets to save.
INFO:tensorflow:No assets to write.
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_1
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_2
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.decay
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.learning_rate
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).layer_with_weights-1.kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).layer_with_weights-1.bias
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).layer_with_weights-1.kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).layer_with_weights-1.bias
WARNING:tensorflow:A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/alpha/guide/checkpoints#loading_mechanics for details.
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures INCLUDED in export for Predict: ['serving_default']
INFO:tensorflow:Signatures INCLUDED in export for Train: None
INFO:tensorflow:Signatures INCLUDED in export for Eval: None
WARNING:tensorflow:Issue encountered when serializing variables.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'list' object has no attribute 'name'
INFO:tensorflow:No assets to save.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: ./1621034604.h5/saved_model.pb

Load the Keras Model

Load the Keras model you saved above.

In [33]:
# TODO: Load the Keras model
# This loads the SavedModel directory exported above with the (deprecated) experimental API.
saved_path = './1621034604.h5'

loaded_model = tf.keras.experimental.load_from_saved_model(saved_path, custom_objects={'KerasLayer':hub.KerasLayer})
loaded_model.build((None, 224, 224, 3))

loaded_model.summary()
WARNING:tensorflow:From <ipython-input-33-40752e84af2b>:4: load_from_saved_model (from tensorflow.python.keras.saving.saved_model_experimental) is deprecated and will be removed in a future version.
Instructions for updating:
The experimental save and load functions have been  deprecated. Please switch to `tf.keras.models.load_model`.
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
keras_layer (KerasLayer)     (None, 1280)              2257984   
_________________________________________________________________
dense (Dense)                (None, 102)               130662    
=================================================================
Total params: 2,388,646
Trainable params: 130,662
Non-trainable params: 2,257,984
_________________________________________________________________
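
As the deprecation warning above suggests, the HDF5 checkpoint saved earlier can also be reloaded with the standard (non-experimental) Keras API. A minimal sketch, where the timestamped filename is a placeholder for whatever name your own save cell produced:

# Reload the HDF5 file saved earlier; './<timestamp>.h5' is a placeholder filename.
reloaded_model = tf.keras.models.load_model('./<timestamp>.h5',
                                            custom_objects={'KerasLayer': hub.KerasLayer})
reloaded_model.summary()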

Inference for Classification

Now you'll write a function that uses your trained network for inference. Write a function called predict that takes an image path, a model, and the number of classes to return (top_k), and returns the top $K$ most likely class labels along with their probabilities. The function call should look like:

probs, classes = predict(image_path, model, top_k)

If top_k=5 the output of the predict function should be something like this:

probs, classes = predict(image_path, model, 5)
print(probs)
print(classes)
> [ 0.01558163  0.01541934  0.01452626  0.01443549  0.01407339]
> ['70', '3', '45', '62', '55']

Your predict function should use PIL to load the image from the given image_path. You can use the Image.open function to load the images. The Image.open() function returns an Image object. You can convert this Image object to a NumPy array by using the np.asarray() function.

The predict function will also need to handle pre-processing the input image such that it can be used by your model. We recommend you write a separate function called process_image that performs the pre-processing. You can then call the process_image function from the predict function.

Image Pre-processing

The process_image function should take in an image (in the form of a NumPy array) and return an image in the form of a NumPy array with shape (224, 224, 3).

First, you should convert your image into a TensorFlow Tensor and then resize it to the appropriate size using tf.image.resize.

Second, the pixel values of the input images are typically encoded as integers in the range 0-255, but the model expects the pixel values to be floats in the range 0-1. Therefore, you'll also need to normalize the pixel values.

Finally, convert your image back to a NumPy array using the .numpy() method.

In [36]:
# TODO: Create the process_image function
def process_image(image):
    tf_image = tf.convert_to_tensor(image)            # NumPy array -> TensorFlow tensor
    tf_image = tf.image.resize(tf_image, (224, 224))  # resize to the model's expected input size
    tf_image = tf.cast(tf_image, tf.float32)
    tf_image /= 255.                                  # scale pixel values from 0-255 to 0-1
    return tf_image.numpy()                           # convert back to a NumPy array

To check your process_image function we have provided 4 images in the ./test_images/ folder:

  • cautleya_spicata.jpg
  • hard-leaved_pocket_orchid.jpg
  • orange_dahlia.jpg
  • wild_pansy.jpg

The code below loads one of the above images using PIL and plots the original image alongside the image produced by your process_image function. If your process_image function works, the plotted image should be the correct size.

In [37]:
from PIL import Image

image_path = './test_images/hard-leaved_pocket_orchid.jpg'
im = Image.open(image_path)
test_image = np.asarray(im)

processed_test_image = process_image(test_image)

fig, (ax1, ax2) = plt.subplots(figsize=(10,10), ncols=2)
ax1.imshow(test_image)
ax1.set_title('Original Image')
ax2.imshow(processed_test_image)
ax2.set_title('Processed Image')
plt.tight_layout()
plt.show()

Once you can get images in the correct format, it's time to write the predict function for making inference with your model.

Inference

Remember, the predict function should take an image path, a model, and the number of classes to return (top_k), and return the top $K$ most likely class labels along with their probabilities. The function call should look like:

probs, classes = predict(image_path, model, top_k)

If top_k=5 the output of the predict function should be something like this:

probs, classes = predict(image_path, model, 5)
print(probs)
print(classes)
> [ 0.01558163  0.01541934  0.01452626  0.01443549  0.01407339]
> ['70', '3', '45', '62', '55']

Your predict function should use PIL to load the image from the given image_path. You can use the Image.open function to load the images. The Image.open() function returns an Image object. You can convert this Image object to a NumPy array by using the np.asarray() function.

Note: The image returned by the process_image function is a NumPy array with shape (224, 224, 3) but the model expects the input images to be of shape (1, 224, 224, 3). This extra dimension represents the batch size. We suggest you use the np.expand_dims() function to add the extra dimension.

In [45]:
# TODO: Create the predict function
def predict(image_path, model, top_k):
    image = Image.open(image_path)
    processed_image = process_image(np.asarray(image))
    
    expanded_image = np.expand_dims(processed_image, axis=0)
    prediction = model.predict(expanded_image)[0]
    int_labels = np.flip(np.argsort(prediction)) # integer labels (0 to 101), sorted from highest to lowest probability
    int_labels = int_labels[:top_k] # keep only the top_k most probable labels
    classes = [str(i+1) for i in int_labels] # shift by 1 to match the 1-indexed keys in label_map.json
    
    probs = np.take(prediction, int_labels)
    return probs, classes

Sanity Check

It's always good to check the predictions made by your model to make sure they are correct. To check your predictions we have provided 4 images in the ./test_images/ folder:

  • cautleya_spicata.jpg
  • hard-leaved_pocket_orchid.jpg
  • orange_dahlia.jpg
  • wild_pansy.jpg

In the cell below use matplotlib to plot the input image alongside the probabilities for the top 5 classes predicted by your model. Plot the probabilities as a bar graph. The plot should look like this:

You can convert from the class integer labels to actual flower names using class_names.

In [61]:
# TODO: Plot the input image along with the top 5 classes
#print(predict('./test_images/cautleya_spicata.jpg', loaded_model, 5))

def plot_prediction(image_path, image_name, model, top_k):
    my_image = Image.open(image_path)
    probs, classes = predict(image_path, model, top_k)
    class_labels = [class_names[label] for label in classes]
    
    fig, (ax1, ax2) = plt.subplots(figsize=(9,9), ncols=2)
    ax1.imshow(my_image, cmap = plt.cm.binary)
    ax1.axis('off')
    ax1.set_title(image_name)
    ax2.barh(np.arange(top_k), probs)
    ax2.set_aspect(0.1)
    ax2.set_yticks(np.arange(top_k))
    ax2.set_yticklabels(class_labels, size='medium');
    ax2.set_title('Class Probability')
    ax2.set_xlim(0, 1.1)
    plt.tight_layout()
In [62]:
plot_prediction('./test_images/cautleya_spicata.jpg', 'cautleya spicata', loaded_model, 5)
In [63]:
plot_prediction('./test_images/hard-leaved_pocket_orchid.jpg', 'hard-leaved pocket orchid', loaded_model, 5)
In [64]:
plot_prediction('./test_images/orange_dahlia.jpg', 'orange dahlia', loaded_model, 5)
In [ ]:
plot_prediction('./test_images/wild_pansy.jpg', 'wild pansy', loaded_model, 5)